25 research outputs found

    Those Aren't Your Memories, They're Somebody Else's: Seeding Misinformation in Chat Bot Memories

    Full text link
    One of the new developments in chit-chat bots is a long-term memory mechanism that remembers information from past conversations for increasing engagement and consistency of responses. The bot is designed to extract knowledge of personal nature from their conversation partner, e.g., stating preference for a particular color. In this paper, we show that this memory mechanism can result in unintended behavior. In particular, we found that one can combine a personal statement with an informative statement that would lead the bot to remember the informative statement alongside personal knowledge in its long term memory. This means that the bot can be tricked into remembering misinformation which it would regurgitate as statements of fact when recalling information relevant to the topic of conversation. We demonstrate this vulnerability on the BlenderBot 2 framework implemented on the ParlAI platform and provide examples on the more recent and significantly larger BlenderBot 3 model. We generate 150 examples of misinformation, of which 114 (76%) were remembered by BlenderBot 2 when combined with a personal statement. We further assessed the risk of this misinformation being recalled after intervening innocuous conversation and in response to multiple questions relevant to the injected memory. Our evaluation was performed on both the memory-only and the combination of memory and internet search modes of BlenderBot 2. From the combinations of these variables, we generated 12,890 conversations and analyzed recalled misinformation in the responses. We found that when the chat bot is questioned on the misinformation topic, it was 328% more likely to respond with the misinformation as fact when the misinformation was in the long-term memory.Comment: To be published in 21st International Conference on Applied Cryptography and Network Security, ACNS 202

    On the Resilience of Biometric Authentication Systems against Random Inputs

    Full text link
    We assess the security of machine learning based biometric authentication systems against an attacker who submits uniform random inputs, either as feature vectors or raw inputs, in order to find an accepting sample of a target user. The average false positive rate (FPR) of the system, i.e., the rate at which an impostor is incorrectly accepted as the legitimate user, may be interpreted as a measure of the success probability of such an attack. However, we show that the success rate is often higher than the FPR. In particular, for one reconstructed biometric system with an average FPR of 0.03, the success rate was as high as 0.78. This has implications for the security of the system, as an attacker with only the knowledge of the length of the feature space can impersonate the user with less than 2 attempts on average. We provide detailed analysis of why the attack is successful, and validate our results using four different biometric modalities and four different machine learning classifiers. Finally, we propose mitigation techniques that render such attacks ineffective, with little to no effect on the accuracy of the system.Comment: Accepted by NDSS2020, 18 page

    Unintended Memorization and Timing Attacks in Named Entity Recognition Models

    Full text link
    Named entity recognition models (NER), are widely used for identifying named entities (e.g., individuals, locations, and other information) in text documents. Machine learning based NER models are increasingly being applied in privacy-sensitive applications that need automatic and scalable identification of sensitive information to redact text for data sharing. In this paper, we study the setting when NER models are available as a black-box service for identifying sensitive information in user documents and show that these models are vulnerable to membership inference on their training datasets. With updated pre-trained NER models from spaCy, we demonstrate two distinct membership attacks on these models. Our first attack capitalizes on unintended memorization in the NER's underlying neural network, a phenomenon NNs are known to be vulnerable to. Our second attack leverages a timing side-channel to target NER models that maintain vocabularies constructed from the training data. We show that different functional paths of words within the training dataset in contrast to words not previously seen have measurable differences in execution time. Revealing membership status of training samples has clear privacy implications, e.g., in text redaction, sensitive words or phrases to be found and removed, are at risk of being detected in the training dataset. Our experimental evaluation includes the redaction of both password and health data, presenting both security risks and privacy/regulatory issues. This is exacerbated by results that show memorization with only a single phrase. We achieved 70% AUC in our first attack on a text redaction use-case. We also show overwhelming success in the timing attack with 99.23% AUC. Finally we discuss potential mitigation approaches to realize the safe use of NER models in light of the privacy and security implications of membership inference attacks.Comment: This is the full version of the paper with the same title accepted for publication in the Proceedings of the 23rd Privacy Enhancing Technologies Symposium, PETS 202

    Use of Cryptography in Malware Obfuscation

    Full text link
    Malware authors often use cryptographic tools such as XOR encryption and block ciphers like AES to obfuscate part of the malware to evade detection. Use of cryptography may give the impression that these obfuscation techniques have some provable guarantees of success. In this paper, we take a closer look at the use of cryptographic tools to obfuscate malware. We first find that most techniques are easy to defeat (in principle), since the decryption algorithm and the key is shipped within the program. In order to clearly define an obfuscation technique's potential to evade detection we propose a principled definition of malware obfuscation, and then categorize instances of malware obfuscation that use cryptographic tools into those which evade detection and those which are detectable. We find that schemes that are hard to de-obfuscate necessarily rely on a construct based on environmental keying. We also show that cryptographic notions of obfuscation, e.g., indistinghuishability and virtual black box obfuscation, may not guarantee evasion detection under our model. However, they can be used in conjunction with environmental keying to produce hard to de-obfuscate versions of programs
    corecore